
    Using LSTM recurrent neural networks for monitoring the LHC superconducting magnets

    The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyses voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when misbehavior of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system and are updated and monitored by human operators. This paper proposes a novel approach to monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art Deep Learning algorithms. The authors examine the performance of LSTM recurrent neural networks for modeling the voltage time series of the magnets. To address this challenging task, different network architectures and hyper-parameters were used to achieve the best possible performance of the solution. The regression results were measured in terms of RMSE for different numbers of future steps and history lengths taken into account for the prediction. The best result of RMSE = 0.00104 was obtained for a network with 128 LSTM cells in the internal layer and a 16-step history buffer.
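
    A minimal sketch of the kind of LSTM regressor described above, assuming Keras/TensorFlow. The 128-cell layer, 16-step history buffer, and RMSE metric come from the abstract; everything else (feature count, optimizer, data shapes, training settings) is an illustrative assumption, not the paper's exact setup.

```python
import numpy as np
import tensorflow as tf

HISTORY = 16      # 16-step history buffer (from the abstract)
N_FEATURES = 1    # assumed: one voltage channel per time step

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(HISTORY, N_FEATURES)),
    tf.keras.layers.LSTM(128),                 # 128 LSTM cells (from the abstract)
    tf.keras.layers.Dense(1),                  # predict the next voltage value
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.MeanSquaredError(),
              metrics=[tf.keras.metrics.RootMeanSquaredError()])  # RMSE, as in the paper

# Illustrative training call on synthetic data shaped like a voltage time series.
x = np.random.randn(1000, HISTORY, N_FEATURES).astype("float32")
y = np.random.randn(1000, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```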

    Assessing Dataset Quality Through Decision Tree Characteristics in Autoencoder-Processed Spaces

    In this paper, we delve into the critical aspect of dataset quality assessment in machine learning classification tasks. Leveraging nine distinct datasets, each crafted for classification tasks of varying complexity, we illustrate the profound impact of dataset quality on model training and performance. We further introduce two additional datasets designed to represent specific data conditions - one maximizing entropy and the other demonstrating high redundancy. Our findings underscore the importance of appropriate feature selection, adequate data volume, and data quality in achieving high-performing machine learning models. To aid researchers and practitioners, we propose a comprehensive framework for dataset quality assessment, which can help evaluate whether the dataset at hand is sufficient and of the required quality for a specific task. This research offers valuable insights into data assessment practices, contributing to the development of more accurate and robust machine learning models.
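
    A hedged sketch of the general idea suggested by the title: encode the data into an autoencoder-processed (latent) space, fit a decision tree there, and read off tree characteristics (depth, leaf count, accuracy) as rough dataset-quality indicators. The concrete framework, thresholds, and datasets are not specified in the abstract; the autoencoder stand-in and all sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data standing in for one of the evaluated datasets.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)

# Simple autoencoder stand-in: an MLP trained to reconstruct its input;
# the bottleneck activations serve as the "autoencoder-processed space".
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
ae.fit(X, X)
Z = np.maximum(X @ ae.coefs_[0] + ae.intercepts_[0], 0)  # hidden-layer (latent) codes

# Decision tree characteristics in the latent space as coarse quality signals.
tree = DecisionTreeClassifier(random_state=0).fit(Z, y)
print("tree depth:", tree.get_depth(),
      "| leaves:", tree.get_n_leaves(),
      "| train accuracy:", tree.score(Z, y))
```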

    The model of an anomaly detector for HiLumi LHC magnets based on Recurrent Neural Networks and adaptive quantization

    This paper examines the applicability of Recurrent Neural Network models for detecting anomalous behavior of the CERN superconducting magnets. In order to conduct the experiments, the authors designed and implemented an adaptive signal quantization algorithm and a custom GRU-based detector, and developed a method for selecting the detector parameters. Three different datasets were used for testing the detector. Two artificially generated datasets were used to assess the raw performance of the system, whereas a 231 MB dataset composed of signals acquired from HiLumi magnets was intended for real-life experiments and model training. Several different setups of the developed anomaly detection system were evaluated and compared with a state-of-the-art OC-SVM reference model operating on the same data. The OC-SVM model was equipped with a rich set of feature extractors accounting for a range of input signal properties. The experiments showed that the detector, along with its supporting design methodology, reaches an F1 score equal or very close to 1 for almost all test sets. Due to the profile of the data, the best_length setup of the detector turned out to perform best among the five tested configuration schemes of the detection system. The quantization parameters have the biggest impact on the overall performance of the detector, with the best values of the input/output grid equal to 16 and 8, respectively. The proposed detection solution significantly outperformed the OC-SVM-based detector in most cases, with much more stable performance across all the datasets.
    Comment: Related to arXiv:1702.0083
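
    A minimal sketch of the quantize-then-predict scheme outlined above: the signal is quantized to a small symbol grid, a GRU predicts the next symbol, and a low probability assigned to the symbol that actually occurred is treated as an anomaly. The input/output grid sizes of 16 and 8 follow the abstract; the uniform quantizer, window length, and layer sizes below are illustrative stand-ins for the paper's adaptive quantization algorithm and detector design.

```python
import numpy as np
import tensorflow as tf

IN_LEVELS, OUT_LEVELS, WINDOW = 16, 8, 32   # input/output grids from the abstract

def quantize(signal, levels):
    """Uniformly quantize a 1-D signal into integer symbols 0..levels-1 (illustrative)."""
    lo, hi = signal.min(), signal.max()
    return np.clip(((signal - lo) / (hi - lo + 1e-9) * levels).astype(int), 0, levels - 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW,), dtype="int32"),
    tf.keras.layers.Embedding(IN_LEVELS, 8),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(OUT_LEVELS, activation="softmax"),  # next-symbol distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def anomaly_scores(model, windows_in, next_symbols):
    """Score each window by how unlikely the observed next symbol was."""
    probs = model.predict(windows_in, verbose=0)
    return 1.0 - probs[np.arange(len(next_symbols)), next_symbols]
```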

    Highly Efficient Twin Module Structure of 64-Bit Exponential Function Implemented on SGI RASC Platform

    This paper presents an implementation of the double-precision exponential function. A novel table-based architecture, together with a short Taylor expansion, provides a low latency (30 clock cycles) which is comparable to 32-bit implementations. The low area consumption of a single exp() module (roughly 4% of an XC4LX200) allows several modules to be implemented in a single FPGA. The employment of massive parallelism results in high performance of the module. Nevertheless, because of the external memory interface limitation, only a twin module structure is presented in this paper. This implementation aims primarily to meet the huge and strict precision and speed requirements of quantum chemistry. Each module is capable of processing at a speed of 200 MHz with a maximum error of 1 ulp and an RMSE of 0.6.
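
    An algorithmic sketch of the table-plus-short-Taylor approach described above, written in Python rather than HDL. The table size and polynomial degree are illustrative choices, not the parameters of the FPGA design; the point is the range reduction, table lookup, and short Taylor correction.

```python
import math

TABLE_BITS = 8
# Precomputed table of 2^(i / 2^TABLE_BITS) for the leading bits of the fraction.
TABLE = [math.exp(i / (1 << TABLE_BITS) * math.log(2)) for i in range(1 << TABLE_BITS)]

def exp_table_taylor(x):
    # Range reduction: x = (k + frac) * ln(2), with frac in [0, 1).
    t = x / math.log(2)
    k = math.floor(t)
    frac = t - k
    i = int(frac * (1 << TABLE_BITS))              # table index from the leading bits
    r = (frac - i / (1 << TABLE_BITS)) * math.log(2)  # small residual argument
    # Short Taylor expansion of exp(r) around 0; degree 3 suffices for this small r.
    poly = 1.0 + r * (1.0 + r * (0.5 + r / 6.0))
    return math.ldexp(TABLE[i] * poly, k)          # multiply by 2^k

print(exp_table_taylor(1.0), math.exp(1.0))        # the two values should closely agree
```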

    FPGA Implementation of Procedures for Video Quality Assessment

    Video resolutions used in a variety of media are constantly rising. While manufacturers struggle to perfect their screens, it is also important to ensure the high quality of the displayed image. Overall quality can be measured using the Mean Opinion Score (MOS). Video quality can be affected by miscellaneous artifacts appearing at every stage of video creation and transmission. In this paper, we present a solution for calculating four distinct video quality metrics that can be applied to a real-time video quality assessment system. Our assessment module is capable of processing 8K resolution video in real time at 30 frames per second. Its throughput of 2.19 GB/s surpasses the performance of pure software solutions. The module was created using a high-level language in order to concentrate on architectural optimization.
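
    An illustrative per-frame full-reference metric, shown here as PSNR; the four metrics implemented in the paper are not named in the abstract, so this is only meant to indicate the kind of pixel-wise computation such a module performs on each frame.

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio between two frames of equal shape, in dB."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

# Example on synthetic 8-bit frames (4K luma plane used here for brevity).
ref = np.random.randint(0, 256, (2160, 3840), dtype=np.uint8)
dist = np.clip(ref.astype(int) + np.random.randint(-5, 6, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, dist):.2f} dB")
```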

    Predictive maintenance of induction motors using ultra-low power wireless sensors and compressed recurrent neural networks

    In real-world applications - to minimize the impact of failures - machinery is often monitored by various sensors. Their role comes down to acquiring data and sending it to a more powerful entity, such as an embedded computer or a cloud server. There have been attempts to reduce the computational effort related to data processing in order to use edge computing for predictive maintenance. The aim of this paper is to push the boundaries even further by proposing a novel architecture in which processing is moved to the sensors themselves, thanks to the decrease in computational complexity afforded by the use of compressed recurrent neural networks. A sensor processes data locally and then wirelessly sends only a single packet with the probability that the machine is working incorrectly. We show that local processing of the data on ultra-low-power wireless sensors gives comparable outcomes in terms of accuracy but much better results in terms of energy consumption than transferring the raw data. The proposed ultra-low-power hardware and firmware architecture makes it possible to use sensors powered by harvested energy while maintaining the high confidentiality levels of the failure prediction previously offered by more powerful mains-powered computational platforms.
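
    A hedged sketch of the on-sensor idea described above: a deliberately small recurrent classifier outputs a single failure probability (the only value the sensor would transmit), and its weights are quantized to 8 bits as a simple stand-in for the compression techniques the paper relies on. The window length, channel count, layer sizes, and quantization scheme below are illustrative assumptions, not the paper's architecture or firmware.

```python
import numpy as np
import tensorflow as tf

WINDOW, N_AXES = 128, 3   # assumed: a short window of 3-axis vibration samples

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_AXES)),
    tf.keras.layers.GRU(16),                          # deliberately tiny recurrent layer
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of incorrect operation
])
model.compile(optimizer="adam", loss="binary_crossentropy")

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of a weight array (compression stand-in)."""
    scale = np.max(np.abs(w)) / 127.0 + 1e-12
    return np.round(w / scale).astype(np.int8), scale

compressed = [quantize_int8(w) for w in model.get_weights()]
print("parameters stored as int8:", sum(q.size for q, _ in compressed))

# On the sensor, only this single probability would be sent per analysis window.
p = model.predict(np.random.randn(1, WINDOW, N_AXES).astype("float32"), verbose=0)
print("failure probability payload:", float(p[0, 0]))
```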